Search results for "Law of large numbers"

Showing 10 of 10 documents

A True Extension of the Markov Inequality to Negative Random Variables

2020

The Markov inequality is a classical result in statistics that is used to prove other important results, such as the Chebyshev inequality and the weak law of large numbers, and that has useful real-world applications: when the distribution of a random variable is unspecified, it provides an upper bound on the probability that the variable exceeds a given threshold. However, the Markov inequality has one main limitation: its validity is restricted to nonnegative random variables. In this very short note, we propose an extension of the Markov inequality to arbitrary, not necessarily nonnegative, random variables. To our knowledge, this result is new.
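The extension itself is not spelled out in the abstract, but the classical inequality it generalizes is easy to check numerically. The following sketch (our own illustration; the Exp(1) test distribution and variable names are arbitrary choices) compares the empirical tail probability P(X ≥ a) with the Markov bound E[X]/a for a nonnegative random variable:

```python
import random

random.seed(0)

# Exp(1) samples: nonnegative, with expectation 1 (an arbitrary test case)
xs = [random.expovariate(1.0) for _ in range(100_000)]

a = 3.0
mean = sum(xs) / len(xs)
tail = sum(1 for x in xs if x >= a) / len(xs)   # empirical P(X >= a)
bound = mean / a                                # Markov bound E[X]/a
```

For Exp(1) the true tail is e^(-3) ≈ 0.05, well below the Markov bound of about 1/3, illustrating that the bound is valid but often loose.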

Keywords: Chebyshev's inequality; Law of large numbers; Markov's inequality; Applied mathematics; Extension (predicate logic); Random variable; Upper and lower bounds; Variable (mathematics); Mathematics. Journal: SSRN Electronic Journal.

Scaling properties of topologically random channel networks

1996

The analysis deals with the scaling properties of infinite topologically random channel networks (ITRNs), first introduced by Shreve (1967, J. Geol., 75: 179–186) to model the branching structure of rivers as a random process. The expected configuration of ITRNs displays scaling behaviour only asymptotically, when the ruler (or 'yardstick') length is reduced to a very small extent. The random model can also reproduce scaling behaviour at larger ruler lengths if network magnitude and diameter are functionally related according to a reported deterministic rule. This indicates that subsets of ITRNs can be scaling and, although ITRNs are asymptotically plane-filling due to the law of la…

Keywords: Discrete mathematics; Dimension (vector space); Yardstick; Law of large numbers; Stochastic process; Structure (category theory); Magnitude (mathematics); Statistical physics; Scaling; Water Science and Technology; Communication channel; Mathematics. Journal: Journal of Hydrology.

Law of the Iterated Logarithm

2020

For sums of independent random variables we already know two limit theorems: the law of large numbers and the central limit theorem. The law of large numbers describes for large \(n\in \mathbb{N}\) the typical behavior, or average value behavior, of sums of \(n\) random variables. On the other hand, the central limit theorem quantifies the typical fluctuations about this average value.
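The division of labor between the two theorems can be seen in a quick simulation (our own sketch with arbitrary choices: Uniform(0,1) summands and a Monte Carlo estimate of the mean absolute deviation). The law of large numbers makes the deviation of \(S_n/n\) from the mean shrink, and the central limit theorem predicts it shrinks roughly like \(1/\sqrt{n}\):

```python
import random

random.seed(1)

def partial_mean_deviation(n, trials=500):
    """Monte Carlo estimate of E|S_n/n - mu| for sums of n Uniform(0,1) variables."""
    mu = 0.5
    total = 0.0
    for _ in range(trials):
        s = sum(random.random() for _ in range(n))
        total += abs(s / n - mu)
    return total / trials

d100 = partial_mean_deviation(100)
d10000 = partial_mean_deviation(10_000)
# LLN: deviations shrink with n; CLT: roughly by the factor sqrt(10000/100) = 10
```

The ratio d100 / d10000 comes out near 10, matching the square-root scaling that the law of the iterated logarithm then refines to an exact almost-sure envelope.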

Keywords: Discrete mathematics; Iterated logarithm; Natural logarithm of 2; Logarithm; Law of large numbers; Law of the iterated logarithm; Limit (mathematics); Random variable; Central limit theorem; Mathematics.

The “Gentle Law” of Large Numbers: Stifter’s Urban Meteorology

2020

Keywords: Geography; Meteorology; Law of large numbers. Journal: Monatshefte.

Moments and Laws of Large Numbers

2020

The most important characteristic quantities of random variables are the median, expectation and variance. For large \(n\), the expectation describes the typical approximate value of the arithmetic mean \((X_1+\dots+X_n)/n\) of independent and identically distributed random variables (law of large numbers).
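The three characteristic quantities, and the convergence of the arithmetic mean they describe, can be illustrated with the standard library (a sketch of our own, not taken from the chapter; the N(10, 4) sample is an arbitrary choice):

```python
import random
import statistics

random.seed(2)
draws = [random.gauss(10.0, 2.0) for _ in range(50_000)]   # iid N(10, 4)

# The three characteristic quantities on a concrete sample
med = statistics.median(draws)
mu_hat = statistics.fmean(draws)       # arithmetic mean (X_1 + ... + X_n)/n
var_hat = statistics.variance(draws)   # sample variance

# Law of large numbers: mu_hat is close to the true expectation 10
```

With 50,000 draws the sample mean typically lands within about 0.01 of the true expectation, while the sample variance settles near the true value 4.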

Keywords: Independent and identically distributed random variables; Variance; Law of large numbers; Statistics; Geiger counter; Value (mathematics); Random variable; Arithmetic mean; Mathematics.

The Vlasov Limit for a System of Particles which Interact with a Wave Field

2008

In two recent publications [Commun. PDE, vol. 22, p. 307–335 (1997); Commun. Math. Phys., vol. 203, p. 1–19 (1999)], A. Komech, M. Kunze and H. Spohn studied the joint dynamics of a classical point particle and a wave-type generalization of the Newtonian gravity potential, coupled in a regularized way. In the present paper the many-body dynamics of this model is studied. The Vlasov continuum limit is obtained in a form equivalent to a weak law of large numbers. We also establish a central limit theorem for the fluctuations around this limit.

Keywords: Physics; Continuum (measurement); Point particle; Statistical and Nonlinear Physics; Vlasov limit; Law of large numbers; Chaotic Dynamics (nlin.CD); Newtonian fluid; Mathematical physics; Central limit theorem. Journal: Communications in Mathematical Physics.

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$ \[ S_n = Cov(X_1,...,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
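The covariance recursion quoted above is straightforward to implement. The following sketch (function name and example history are our own) shows how the $\epsilon I$ term keeps $S_n$ nonsingular even when the chain history is degenerate:

```python
import numpy as np

def am_proposal_cov(history, eps=1e-6):
    """Proposal covariance of the Adaptive Metropolis algorithm:
    sample covariance of the chain history plus eps times the identity."""
    X = np.asarray(history, dtype=float)   # shape (n, d), n >= 2
    d = X.shape[1]
    return np.cov(X, rowvar=False) + eps * np.eye(d)

# Degenerate history lying on a line: the sample covariance is singular,
# but eps * I keeps the smallest eigenvalue of S_n at least eps.
history = [[t, 2.0 * t] for t in range(10)]
S = am_proposal_cov(history, eps=1e-6)
```

The lower bound eps on the eigenvalues is exactly what the article proposes to dispense with, since a good value of eps depends on the scale of the target.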

Keywords: Statistics and Probability; Identity matrix; Statistics Theory (math.ST); Upper and lower bounds; Computation (stat.CO); Combinatorics; Law of large numbers; Stochastic approximation; Eigenvalues and eigenvectors; Metropolis algorithm; Probability (math.PR); Zero (complex analysis); Covariance; Stability; Uniform continuity; Bounded function; Adaptive Markov chain Monte Carlo; Statistics, Probability and Uncertainty; Mathematics. MSC: 60J27, 65C40, 93E15, 93E35.

Estimating the geometric median in Hilbert spaces with stochastic gradient algorithms: Lp and almost sure rates of convergence

2016

The geometric median, also called the $L^1$-median, is often used in robust statistics. Moreover, it is increasingly common to deal with large samples taking values in high-dimensional spaces. In this context, a fast recursive estimator was introduced by Cardot et al. (2013). This work aims at studying more precisely the asymptotic behavior of estimators of the geometric median based on such nonlinear stochastic gradient algorithms. The $L^p$ rates of convergence as well as almost sure rates of convergence of these estimators are derived in general separable Hilbert spaces. Moreover, the optimal rates of convergence in quadratic mean of the averaged algorithm are also given.
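A finite-dimensional sketch of such a nonlinear stochastic gradient recursion with Polyak-Ruppert averaging (our own simplified version in R² with hypothetical step-size constants, not the authors' exact algorithm): each step moves the iterate a distance gamma_n toward the new observation, and the averaged iterate estimates the geometric median.

```python
import numpy as np

def geometric_median_sgd(X, c=1.0, alpha=0.66):
    """Averaged stochastic gradient sketch of the geometric median.

    Robbins-Monro recursion m <- m + gamma_n * (x - m)/||x - m|| with
    step sizes gamma_n = c / n**alpha, plus a running average of iterates."""
    m = X[0].astype(float)
    avg = m.copy()
    for n, x in enumerate(X[1:], start=1):
        diff = x - m
        norm = np.linalg.norm(diff)
        if norm > 1e-12:                    # skip degenerate steps with x == m
            m = m + (c / n ** alpha) * diff / norm
        avg = avg + (m - avg) / (n + 1)     # Polyak-Ruppert averaging
    return avg

rng = np.random.default_rng(3)
X = rng.normal(size=(20_000, 2))            # symmetric target: median at the origin
est = geometric_median_sgd(X)
```

For a spherically symmetric sample the geometric median is the origin, so the averaged estimate should land close to (0, 0); the averaging step is what delivers the optimal quadratic-mean rate discussed in the abstract.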

Keywords: Statistics and Probability; Numerical Analysis; Robust statistics; Hilbert space; Estimator; Context (language use); Geometric median; Separable space; Law of large numbers; Convergence (routing); Statistics, Probability and Uncertainty; Algorithm; Mathematics. Journal: Journal of Multivariate Analysis.

An Adaptive Parallel Tempering Algorithm

2013

Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis–Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. We also prove as a side…
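A minimal, non-adaptive parallel tempering sketch (our own toy example with a fixed two-temperature schedule and a bimodal 1-d target; the paper's algorithm additionally adapts the schedule and the random-walk kernel): each chain runs random-walk Metropolis on a tempered target, and occasional state swaps let the hot chain ferry the cold chain between modes.

```python
import math
import random

def log_target(x):
    """Bimodal target: unnormalized mixture of N(-3, 1) and N(3, 1)."""
    return math.log(math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2))

def parallel_tempering(n_iter, betas=(1.0, 0.3), step=1.0, seed=4):
    """Two-temperature parallel tempering with random-walk Metropolis moves."""
    rng = random.Random(seed)
    x = [0.0 for _ in betas]
    samples = []
    for _ in range(n_iter):
        # local random-walk Metropolis move at each inverse temperature
        for i, b in enumerate(betas):
            prop = x[i] + rng.gauss(0.0, step)
            if math.log(rng.random()) < b * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        # propose swapping the states of the two chains
        a = (betas[0] - betas[1]) * (log_target(x[1]) - log_target(x[0]))
        if math.log(rng.random()) < a:
            x[0], x[1] = x[1], x[0]
        samples.append(x[0])   # keep only the beta = 1 (cold) chain
    return samples

xs = parallel_tempering(20_000)
```

Without the hot chain and the swap moves, a random-walk Metropolis chain with this step size would tend to stay stuck in one of the two modes; with them, the cold chain visits both.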

Keywords: Statistics and Probability; Schedule; Mathematical optimization; Adaptive algorithm; Ergodicity; Mixing (mathematics); Law of large numbers; Kernel (statistics); Convergence (routing); Discrete Mathematics and Combinatorics; Parallel tempering; Statistics, Probability and Uncertainty; Algorithm; Mathematics. Journal: Journal of Computational and Graphical Statistics.

On the stability and ergodicity of adaptive scaling Metropolis algorithms

2011

The stability and ergodicity properties of two adaptive random-walk Metropolis algorithms are considered. Both algorithms adjust the scaling of the proposal distribution continuously based on the observed acceptance probability. Unlike previously proposed forms of the algorithms, the adapted scaling parameter is not constrained within a predefined compact interval. The first algorithm is based on scale adaptation only, while the second also incorporates covariance adaptation. A strong law of large numbers is shown to hold assuming that the target density is smooth enough and has either compact support or super-exponentially decaying tails.
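A sketch of scale-only acceptance-rate adaptation in one dimension (our own simplified version with hypothetical constants, not the paper's exact scheme): the log proposal scale is nudged by a stochastic-approximation rule toward a target acceptance rate, and, as in the abstract, it is not confined to a compact interval.

```python
import math
import random

def adaptive_scale_metropolis(log_target, x0, n_iter, target_acc=0.44, seed=5):
    """Scale-only adaptive random-walk Metropolis (1-d sketch).

    After each step the log proposal scale is adjusted by a Robbins-Monro
    rule driving the acceptance probability toward target_acc; the scale
    itself is left unconstrained."""
    rng = random.Random(seed)
    x, log_s = x0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        prop = x + math.exp(log_s) * rng.gauss(0.0, 1.0)
        alpha = math.exp(min(0.0, log_target(prop) - log_target(x)))
        if rng.random() < alpha:
            x = prop
        log_s += n ** -0.66 * (alpha - target_acc)   # unconstrained adaptation
        samples.append(x)
    return samples, math.exp(log_s)

samples, scale = adaptive_scale_metropolis(lambda x: -0.5 * x * x, 0.0, 20_000)
```

For a standard normal target the adapted scale settles near the well-mixing regime (around 2.4 for the 0.44 acceptance target), and the chain's empirical mean and variance approach 0 and 1; the stability question the paper addresses is precisely whether this unconstrained recursion can instead drift off to zero or infinity.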

Keywords: Statistics and Probability; Stochastic approximation; Statistics Theory (math.ST); Law of large numbers; Multiple-try Metropolis; Stability (probability); Adaptive Markov chain Monte Carlo; Scaling; Metropolis algorithm; Applied Mathematics; Rejection sampling; Ergodicity; Probability (math.PR); Covariance; Random walk; Metropolis–Hastings algorithm; Modeling and Simulation; Algorithm; Stability; Mathematics. MSC: 60J27, 65C40, 93E15, 93E35. Journal: Stochastic Processes and their Applications.